67 research outputs found

    On Temporal Processing in the MSO: Modelling the Neuronal Processes in the Medial Superior Olive

    This work is concerned with modelling the neuronal processes underlying auditory localisation. Many of the models currently under discussion can be traced back to a network model proposed by L. Jeffress in the middle of the last century: according to Jeffress, interaural time differences (ITDs) between the two auditory pathways are evaluated in a network of detector neurons (coincidence detectors). Systematic conduction delays result from the architecture of the network, which is assumed to implement so-called delay lines. Despite a body of evidence for the model, which has meanwhile attained paradigm status in auditory research, criticism of the Jeffress model has recently attracted increasing attention and interest. B. Grothe and D. McAlpine, for instance, argue against transferring the delay-line model to the situation in mammals. Central to their critique is an afferent input to the MSO from a further subdivision of the olive (the MNTB). The principal effect of the inhibitory synapse formed by this projection is a relative shift of the best delays of MSO cells towards a preference for contralateral delays. This not only contradicts the distribution of best delays required by the Jeffress model; for small mammals, the best delays at low frequencies also lie outside the range of physiologically occurring ITDs, owing to their small head size. In this work, the findings of Grothe and McAlpine are analysed by compartmental modelling. In contrast to a simulation study from the groups of Grothe and McAlpine, we describe additional effects of the inhibition by modelling the dendrites explicitly.
We show how the topography of inhibition and excitation supports processing in bipolar cells through dendritic low-pass filtering and contrast enhancement between minimal and maximal spike rates. Our results can explain the empirically observed distribution of excitatory (distal) and inhibitory (proximal) synapses. In the concluding analysis of the spike trains generated by the bipolar cells, the alternative rate-code-based ITD coding model proposed by Grothe and McAlpine is called into question: so far, their proposal does not explain how an organism's localisation performance can be achieved on the basis of only a few spikes.
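The delay-line scheme described above can be sketched in a few lines: an array of detector neurons, each applying a different internal delay to the leading input, responds most strongly when its delay compensates the stimulus ITD. All values below (spike times in ms, ITD, coincidence window) are invented for illustration and are not taken from the study.

```python
# Minimal illustrative sketch of a Jeffress-style delay-line network:
# each detector applies a different internal delay to the leading (left)
# input and counts near-coincidences with the lagging (right) input.
# Spike times, ITD and window are invented example values (in ms).

def coincidence_counts(left_spikes, right_spikes, internal_delays, window=0.05):
    counts = []
    for delay in internal_delays:
        shifted = [t + delay for t in left_spikes]  # axonal delay line
        n = sum(1 for tl in shifted for tr in right_spikes
                if abs(tl - tr) <= window)
        counts.append(n)
    return counts

# Source on the left: the right ear receives each spike 0.3 ms later.
itd = 0.3
left = [1.0, 3.0, 5.0]
right = [t + itd for t in left]

delays = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
counts = coincidence_counts(left, right, delays)
best_delay = delays[counts.index(max(counts))]  # detector matching the ITD
```

The detector whose internal delay equals the stimulus ITD collects the most coincidences; Grothe and McAlpine's critique concerns precisely whether mammals realise such an array of systematically distributed delays.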

    RO-Crate community update 2024

    Here we give an update on the community development and adoption of RO-Crate for FAIR Digital Objects since FDO2022. Notably, programmatic access and more detailed profiles have received considerable attention, as have several FDO implementations that use RO-Crate.
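As a minimal illustration of what an RO-Crate looks like on the wire, the sketch below builds an `ro-crate-metadata.json` by hand following the RO-Crate 1.1 layout (a self-describing metadata descriptor plus a root dataset); the dataset name and file paths are invented placeholders.

```python
import json

# Minimal RO-Crate 1.1 metadata file built by hand (no library dependency).
# The dataset name and file names are illustrative assumptions.
crate = {
    "@context": "https://w3id.org/ro/crate/1.1/context",
    "@graph": [
        {   # self-describing metadata descriptor
            "@id": "ro-crate-metadata.json",
            "@type": "CreativeWork",
            "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
            "about": {"@id": "./"},
        },
        {   # root data entity: the object the crate describes
            "@id": "./",
            "@type": "Dataset",
            "name": "Example specimen dataset",
            "hasPart": [{"@id": "data/observations.csv"}],
        },
        {"@id": "data/observations.csv", "@type": "File",
         "name": "Observations"},
    ],
}
metadata = json.dumps(crate, indent=2)
```

Because the format is plain JSON-LD, programmatic access of the kind mentioned above needs nothing beyond a JSON parser, though dedicated libraries exist.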

    Extending the Environment Ontology with Text-mined Habitat Mentions

    Ontologies, i.e., formal specifications of concepts and relations relevant to a specialised domain of interest, are information resources which play a crucial role in the tasks of knowledge representation, management and discovery. Knowledge acquisition, the process of curating and updating them, is typically carried out manually, requiring human effort that is tedious, time-consuming and expensive. This holds especially true for ontologies which are continuously being expanded with new terms in their aim to support a growing number of use cases. One such example is the Environment Ontology (ENVO). Initially developed to support the annotation of metagenomic data, ENVO has more recently realigned its goals in support of the Sustainable Development Agenda for 2030 and is thus currently much broader in scope, covering the domains of biodiversity and ecology. As a result, the number of classes in ENVO has increased dramatically; hence the process of curating and updating the ontology can benefit from automated support. In this work, we aim to help expand ENVO more efficiently by automatically discovering new habitat mentions. To this end, we developed a text mining-based approach underpinned by the following pipeline: (1) automatic extraction of habitat mentions from text using named entity recognition methods; (2) normalisation of every extracted mention, i.e., identification of the most relevant ENVO term based on the lexical similarity between them; (3) application of a filter to retain only habitat mentions that do not yet appear to exist in ENVO; and (4) construction of clusters over the remaining mentions.
The pipeline results in clusters consisting of potential synonyms and lexical variations of existing terms, as well as semantically related expressions, which can then be evaluated for integration into an existing ENVO class or, on occasion, be indicative of a new class that could be added to the ontology. Applying our approach to a corpus pertaining to the Dipterocarpaceae family of forest trees (based on documents from the Biodiversity Heritage Library and grey literature), we generated more than 1,000 new habitat terms for potential incorporation into ENVO.
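Steps (2) to (4) of the pipeline can be sketched as follows, assuming step (1) has already produced a list of text-mined mentions. The ENVO terms, thresholds and similarity measure (plain string similarity via `difflib`) are illustrative stand-ins, not the methods and settings actually used by the authors.

```python
import difflib

# Toy inputs: a few existing ENVO labels and some text-mined mentions.
envo_terms = ["tropical rainforest", "peat swamp", "riverbank"]
mentions = ["tropical rain forest", "lowland dipterocarp forest",
            "peatswamp", "river bank", "montane mossy forest"]

def best_match(mention, terms):
    """(2) Normalisation: most lexically similar ENVO term and its score."""
    scored = [(difflib.SequenceMatcher(None, mention, t).ratio(), t)
              for t in terms]
    return max(scored)

# (3) Filter: keep only mentions with no sufficiently close existing term.
THRESHOLD = 0.9
novel = [m for m in mentions if best_match(m, envo_terms)[0] < THRESHOLD]

# (4) Naive single-link clustering of the remaining mentions by similarity.
clusters = []
for m in novel:
    for c in clusters:
        if any(difflib.SequenceMatcher(None, m, x).ratio() > 0.6 for x in c):
            c.append(m)
            break
    else:
        clusters.append([m])
```

In this toy run, spelling variants such as "peatswamp" and "river bank" are absorbed by the filter, while genuinely new habitat expressions survive into the clustering step.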

    A choice of persistent identifier schemes for the Distributed System of Scientific Collections (DiSSCo)

    Persistent identifiers (PIDs) that identify digital representations of physical specimens in natural science collections (i.e., digital specimens) unambiguously and uniquely on the Internet are one of the mechanisms for digitally transforming collections-based science. Digital Specimen PIDs contribute to building and maintaining long-term community trust in the accuracy and authenticity of the scientific data to be managed and presented by the Distributed System of Scientific Collections (DiSSCo) research infrastructure, planned in Europe to commence implementation in 2024. Not only are such PIDs valid over the very long timescales common in the heritage sector; they can also transcend changes in the underlying technologies of their implementation. They are part of the mechanism for widening access to natural science collections. DiSSCo technical experts previously selected the Handle System as the choice to meet core PID requirements. Using a two-step approach, this options appraisal captures, characterises and analyses alternative Handle-based PID schemes and their possible operational modes of use. In the first step, a weighting and ranking of the options was applied, followed by a structured qualitative assessment of social and technical compliance across several dimensions: scalability, community trust, persistence, governance, appropriateness of the scheme and suitability for future global adoption. The results are discussed in relation to branding, community perceptions and the global context to determine a preferred PID scheme for DiSSCo that also has potential for global adoption and acceptance. DiSSCo will adopt a ‘driven-by DOI’ persistent identifier (PID) scheme customised with natural sciences community characteristics.
Establishing a new Registration Agency in collaboration with the International DOI Foundation is a practical way forward to support the FAIR (findable, accessible, interoperable, reusable) data architecture of the DiSSCo research infrastructure. This approach is compatible with the policies of the European Open Science Cloud (EOSC) and aligned with existing practices across the global community of natural science collections.
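The first appraisal step, weighting and ranking the candidate schemes, can be illustrated with a simple weighted-sum sketch. The option names, criteria weights and scores below are invented for illustration and are not the values used in the DiSSCo appraisal.

```python
# Hypothetical weighting-and-ranking step of an options appraisal.
# Criteria follow the dimensions named in the abstract; all weights and
# 1-5 scores are invented example values, not DiSSCo's actual assessment.
criteria = {"scalability": 0.25, "community trust": 0.25,
            "persistence": 0.20, "governance": 0.15,
            "global adoption": 0.15}

options = {
    "plain Handle":  {"scalability": 4, "community trust": 3,
                      "persistence": 4, "governance": 3,
                      "global adoption": 3},
    "driven-by DOI": {"scalability": 4, "community trust": 5,
                      "persistence": 5, "governance": 4,
                      "global adoption": 5},
    "other Handle scheme": {"scalability": 4, "community trust": 4,
                            "persistence": 4, "governance": 4,
                            "global adoption": 3},
}

def weighted_score(scores):
    # Weighted sum over all criteria for one option.
    return sum(w * scores[c] for c, w in criteria.items())

ranking = sorted(options, key=lambda o: weighted_score(options[o]),
                 reverse=True)
```

A quantitative first pass like this narrows the field before the structured qualitative assessment that the abstract describes as the second step.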

    Machine learning as a service for DiSSCo’s digital specimen architecture

    International mass digitization efforts through infrastructures like the European Distributed System of Scientific Collections (DiSSCo), the US resource for Digitization of Biodiversity Collections (iDigBio), the National Specimen Information Infrastructure (NSII) of China, and Australia’s digitization of National Research Collections (NRCA Digital) make geo- and biodiversity specimen data freely, fully and directly accessible. Complementing these, overarching infrastructure initiatives like the European Open Science Cloud (EOSC) were established to enable the mutual integration, interoperability and reusability of multidisciplinary data streams, including those of the biodiversity, Earth system and life sciences. Natural Science Collections (NSC) are of particular importance for such multidisciplinary and internationally linked infrastructures, since they provide hard scientific evidence by allowing direct traceability of derived data (e.g., images, sequences, measurements) to the physical specimens and material samples in NSC. To open up the large amounts of trait and habitat data and to link these data to digital resources like sequence databases (e.g., ENA), taxonomic infrastructures (e.g., GBIF) or environmental repositories (e.g., PANGAEA), proper annotation of specimen data with rich (meta)data early in the digitization process is required, alongside bridging technologies that facilitate the reuse of these data. This was addressed in recent studies, where we employed computational image processing and artificial intelligence technologies (Deep Learning) for the classification and extraction of features like organs and morphological traits from digitized collection data (with a focus on herbarium sheets).
However, such applications of artificial intelligence, whether (sub-symbolic) machine learning or (symbolic) ontology-based annotation, are rarely integrated into the workflows of NSC management systems, which are the essential repositories for the aforementioned integration of data streams. This was the motivation for the development of a Deep Learning-based trait extraction and coherent Digital Specimen (DS) annotation service providing “Machine Learning as a Service” (MLaaS), with a special focus on interoperability with the core services of DiSSCo, notably the DS Repository (nsidr.org) and the Specimen Data Refinery, as well as reusability within the data fabric of EOSC. Taking up the use case of detecting and classifying regions of interest (ROIs) on herbarium scans, we demonstrate a MLaaS prototype for DiSSCo involving the digital object framework Cordra for the management of DS, as well as instant annotation of digital objects with extracted trait features (and ROIs) based on the DS specification openDS.
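The annotation half of such a service can be sketched as follows: wrapping ROI detections from a (hypothetical) model into a JSON payload attached to a Digital Specimen identifier. The field names and example values are illustrative assumptions only; they do not follow the openDS specification or the nsidr.org API.

```python
import json

# Hedged sketch of the MLaaS round trip: wrap ROI detections into an
# annotation payload for a Digital Specimen. All field names and example
# values are invented for illustration, not the openDS specification.

def rois_to_annotation(specimen_id, detections):
    """detections: list of (label, confidence, (x, y, w, h)) tuples."""
    return {
        "target": specimen_id,              # identifier of the Digital Specimen
        "motivation": "ml-trait-extraction",
        "body": [
            {"trait": label,
             "confidence": round(conf, 3),
             "roi": {"x": x, "y": y, "w": w, "h": h}}
            for label, conf, (x, y, w, h) in detections
        ],
    }

# Example model output for one herbarium scan (invented values):
detections = [("leaf", 0.97, (120, 340, 400, 260)),
              ("flower", 0.81, (610, 90, 150, 140))]
annotation = rois_to_annotation("example-specimen-id", detections)
payload = json.dumps(annotation)
```

Keeping the annotation as a small, self-contained JSON object is what lets a digital object framework such as Cordra store and serve it alongside the specimen record.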

    Extracting Trait Data from Digitized Herbarium Specimens Using Deep Convolutional Networks

    Herbarium collections have been the foundation of taxonomical research for centuries and have become increasingly important for related fields such as plant ecology and biogeography. Herbaria worldwide are estimated to hold c. 400 million specimens, which, through the inclusion of type specimens, cover all known plant taxa (c. 350,000 species) with few exceptions and have a temporal dimension matched by only a few other botanical data sources. Presently, c. 13.5 million digitized herbarium specimens are available online via institutional websites or aggregating websites like GBIF. We used these specimen images in combination with morphological trait data obtained from TRY and the FLOPO knowledge base to train deep convolutional networks to recognize these traits as well as phenological states from specimen images. To improve trait recognition, we expanded our approach to include high-resolution scans that enable fine-grained feature extraction. Furthermore, we analyze differences in the recognizability of traits depending on trait group (e.g., leaf traits) or higher taxa. Newly mobilized trait data will be used to improve our trait databases. Our approach is described in detail, and performance in the recognition of different traits is analyzed and discussed.
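The concluding analysis step, comparing recognition performance across trait groups, can be sketched as a simple per-group accuracy computation. The traits, groups and predictions below are invented example data, not results from the study.

```python
from collections import defaultdict

# Illustrative per-trait-group accuracy analysis. The trait names, group
# assignments and model predictions are invented example data.
trait_group = {"leaf length": "leaf", "leaf margin": "leaf",
               "flower colour": "flower", "fruit type": "fruit"}

# One (trait, predicted, actual) record per annotated specimen image.
predictions = [
    ("leaf length", "long", "long"),
    ("leaf margin", "entire", "serrate"),
    ("leaf margin", "entire", "entire"),
    ("flower colour", "white", "white"),
    ("fruit type", "capsule", "berry"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for trait, pred, actual in predictions:
    group = trait_group[trait]
    totals[group] += 1
    hits[group] += pred == actual   # bool counts as 0/1

accuracy = {g: hits[g] / totals[g] for g in totals}
```

Aggregating by trait group in this way makes it easy to see, for example, whether leaf traits are recognized more reliably than reproductive traits.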

    Enriched biodiversity data as a resource and service

    Background: Recent years have seen a surge in projects that produce large volumes of structured, machine-readable biodiversity data. To make these data amenable to processing by generic, open source “data enrichment” workflows, they are increasingly being represented in a variety of standards-compliant interchange formats. Here, we report on an initiative in which software developers and taxonomists came together to address the challenges and highlight the opportunities in the enrichment of such biodiversity data by engaging in intensive, collaborative software development: The Biodiversity Data Enrichment Hackathon. Results: The hackathon brought together 37 participants (including developers and taxonomists, i.e. scientific professionals that gather, identify, name and classify species) from 10 countries: Belgium, Bulgaria, Canada, Finland, Germany, Italy, the Netherlands, New Zealand, the UK, and the US. The participants brought expertise in processing structured data, text mining, development of ontologies, digital identification keys, geographic information systems, niche modeling, natural language processing, provenance annotation, semantic integration, taxonomic name resolution, web service interfaces, workflow tools and visualisation. Most use cases and exemplar data were provided by taxonomists. One goal of the meeting was to facilitate re-use and enhancement of biodiversity knowledge by a broad range of stakeholders, such as taxonomists, systematists, ecologists, niche modelers, informaticians and ontologists. The suggested use cases resulted in nine breakout groups addressing three main themes: i) mobilising heritage biodiversity knowledge; ii) formalising and linking concepts; and iii) addressing interoperability between service platforms. 
Another goal was to further foster a community of experts in biodiversity informatics and to build human links between research projects and institutions, in response to recent calls to further such integration in this research domain. Conclusions: Beyond deriving prototype solutions for each use case, areas of inadequacy were discussed and are being pursued further. It was striking how many possible applications for biodiversity data there were, and how quickly solutions could be put together when the normal constraints to collaboration were broken down for a week. Conversely, mobilising biodiversity knowledge from its silos in heritage literature and natural history collections will continue to require formalisation of the concepts (and the links between them) that define the research domain, as well as increased interoperability between the software platforms that operate on these concepts.